Few-shot object detection based on attention mechanism and secondary reweighting of meta-features
Runchao LIN, Rong HUANG, Aihua DONG
Journal of Computer Applications    2022, 42 (10): 3025-3032.   DOI: 10.11772/j.issn.1001-9081.2021091571
Abstract

In few-shot object detection based on transfer learning, existing models lack an attention mechanism that focuses on the object to be detected, so their ability to suppress the background surrounding the object is weak; in addition, transfer learning usually requires fine-tuning the meta-features to achieve cross-domain sharing, which causes meta-feature shift and degrades detection performance on large-sample images. To address these problems, an improved meta-feature transfer model, Up-YOLOv3, based on an attention mechanism and a secondary meta-feature reweighting mechanism was proposed. Firstly, an attention mechanism based on the Convolutional Block Attention Module (CBAM) was introduced into the original meta-feature transfer model Base-YOLOv2, so that the feature extraction network focused on the object region in the image and attended to the detailed features of the object class, thereby improving detection performance on few-shot image objects. Then, the Squeeze and Excitation-Secondary Meta-Feature Reweighting (SE-SMFR) module was introduced to reweight the meta-features of large-sample images a second time, so that the model not only improved few-shot object detection performance but also reduced the weight shift of the meta-feature information of large-sample images. Experimental results on the PASCAL VOC2007/2012 datasets show that, compared with Base-YOLOv2, Up-YOLOv3 improves the mean Average Precision (mAP) on few-shot object images by 2.3 to 9.1 percentage points, and compared with Base-YOLOv3, the original meta-feature transfer model based on YOLOv3, it improves the mAP on large-sample object images by 1.8 to 2.4 percentage points. These results indicate that the improved model generalizes well and is robust on both large-sample images and few-shot images of different classes.
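To make the two mechanisms named in the abstract concrete, below is a minimal PyTorch sketch of a CBAM-style attention block (channel attention followed by spatial attention) and an SE-style block that re-weights meta-feature channels a second time. This is not the authors' implementation: the module names, layer sizes, and the reduction ratio of 16 are illustrative assumptions, and where these blocks are inserted in the YOLO backbone is not specified here.

# Hedged sketch, assuming standard CBAM and SE formulations; not the paper's code.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """CBAM-style attention: channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 convolution over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)       # channel attention
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        attn = torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                          # spatial attention

class SEReweight(nn.Module):
    """SE-style block that re-weights meta-feature channels a second time."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, meta_features):
        b, c, _, _ = meta_features.shape
        weights = self.fc(meta_features.mean(dim=(2, 3))).view(b, c, 1, 1)
        return meta_features * weights

if __name__ == "__main__":
    feats = torch.randn(2, 256, 13, 13)   # hypothetical backbone feature map
    feats = CBAM(256)(feats)              # focus on object regions
    meta = SEReweight(256)(feats)         # secondary channel reweighting
    print(meta.shape)                     # torch.Size([2, 256, 13, 13])

The intent of the second block mirrors the abstract's claim: re-scaling meta-feature channels with learned weights, rather than fine-tuning the meta-features themselves, is what limits the weight shift on large-sample classes.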
